Trusted Sources

On the faint hope that networks of trust might emerge from the noise and restore some sense of orientation

1066 words

v1

Originally published on eighttrigrams.substack.com on February 18th, 2026

This is a topic I’ve wanted to comment on for a long time but could never quite tie together. So what follows is rough around the edges — but I believe it provides value in itself, because the observation describes a real trend.

There was once a book — and like so many books, I only read part of it — but I’ve always found its proposition fascinating. It was The Human Swarm by Mark Moffett. It deals with a curious fact about humans: we are unique among social animals in that we can navigate daily life surrounded by anonymous strangers without freaking out. For most social species, an unfamiliar individual is a potential grave threat to the group. On the other end of the spectrum, there’s the hive mind of ants — but surely we don’t want to compare ourselves to that. After all, the bugs in Starship Troopers were the enemy. Anyway, the point stands: we are social animals who can handle strangers. We sit next to them on the tram to work, we meet friends in a café surrounded by people we don’t know.

The internet is a strange place in this regard. We can interact with virtually anyone — at least anyone who’s also online. We can peek into an enormously varied world of content across video platforms. Sure, there are parasocial relationships that people build with their influencers of choice. Sure, a subset of people actively engage with others on social media — they form networks, slide into DMs, and sometimes genuine human relationships come out of it. But in the vast majority of cases, I’d argue, nothing of the sort happens. Instead, people passively consume whatever piques their interest in the moment.



What irks me particularly about recommender systems is that they both feed into this dynamic and feed off of it. Instead of building relationships and forming knowledge, what gets activated is what’s sometimes called the monkey mind — your restless attention driven haywire by ever more sugar for an already unruly animal. This is where hyper-individualism and capitalism culminate: in the very existence of recommender systems, especially those of video platforms, and especially those serving short-form content. And from an accelerationist perspective, I’m actually quite pleased that this tendency now reaches its logical conclusion with the advent of AI slop.


… even if it’s good slop. We’re literally drowning in content. There is just too much of virtually everything (or rather, everything virtual).

I don’t have high hopes that things will go my way here, but as a human you never lose hope entirely. What I hope is that the vanishingly low cost of producing any type of content will eventually force us to interact mostly with material and sources we have a direct relationship to — for whatever reason, but especially because we know the creator and how the work came about. In short, I wish we’d consume only material that is worth our attention.

With all that said, here are some links I hope will substantiate the point.


This cropped up a while ago — it’s about how Ted Gioia, a well-known music and art critic, had an AI-written article in his “best reads of 2025” list.

Alberto Romero I Liked the Essay. Then I Found Out It Was AI

This gives a new quality to the old “can you separate the author from the work” question. I actually used to side with yes, you can — but I don’t think that anymore. Not only because of AI, though certainly in part because of it. If the essay is good and you enjoy it, then so what? So what, indeed … But the logical endpoint of that reasoning somehow leads to plugging yourself fully into devices that give you infinite pleasure across all senses. I found it entertaining in high school philosophy class to defend that position, but the wisdom of age — and the actual possibility of such things — makes me say: no.



Speaks for itself, I think. Source: simonwillison.net.


Jayson Fritz-Stibbe Someone is Using AI to Exploit Lonely Writers on Substack

An excerpt:

The formula is simple. Automate a process to find new posts from small accounts, then run the post through an LLM, asking it to

  1. give brief, encouraging feedback,

  2. compliment their framing,

  3. appeal to personal authority, and

  4. use a casual tone (with 1 or 2 spelling mistakes).

This account is offering the false promise of human conversation, then delivering it by running a script. They are socially engineering people who are just trying to converse about subjects that matter to them – tricking them into thinking that the Substack algorithm is working for them. A bigger account found their article, took the time to read it, and left them a kind note.

They are using AI to farm parasocial relationships, crop dusting new accounts with the fleeting attention of a God-damned machine.


In these hype-cycle-ridden times it always seems a little odd to call something a “Manifesto” — but then again, who am I to talk, someone who writes “Whitepapers” as if anyone would care. :D


resonantcomputing.org

I found this a little bit endearing, although I don’t think it has gained, or will gain, traction. It expresses a yearning, to be sure, to get back to something real, and tries to interpret the inevitable as a chance. Okay, that was a bit cynical, I admit it. My critique — not criticism — is that it is way too tame. It points at something vital but is not self-conscious enough to fully admit to it. Anyway, it mentions Christopher Alexander, and Alexander has clearly put his finger on that wound with his work, so I suggest people interested in this follow that lead.


I’ve only read Snow Crash by Neal Stephenson and didn’t dig it too much, to be honest, but this concept of ‘phyles’ keeps cropping up, and contrarian investor Doug Casey, whom I’ve been following for a long time, is also quite fond of it. Maybe I’ll finally give The Diamond Age a shot at some point.


I am confident by now you’ve spotted a theme here.
